Credit assignment in AI is about working out which rule or parameter should be given the credit (or blame) when something works well (or badly). This is important to enable the system to learn by reinforcing good rules. However, it can be hard when several rules are involved, which is one of the reasons for explanation-based learning. It can be especially difficult in reinforcement learning, as the reward may occur some time after the critical action, with other actions in between. For example, if you take a wrong turn early in a journey it may be some time before you realise, but the last few navigation choices may have been reasonable – it is the first choice that should be given the 'credit' (in this case, blame).
Used in Chap. 5: page 70; Chap. 16: page 242
Also known as credit assignment problem
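The journey example can be made concrete with a minimal sketch (not from the source) of how naive temporal credit assignment behaves in reinforcement learning. It uses a standard discounted return, G_t = r_t + γ·G_{t+1}, to propagate a single delayed penalty back through the episode; the rewards and discount factor are illustrative assumptions.

```python
def discounted_returns(rewards, gamma=0.9):
    """Compute the return G_t = r_t + gamma * G_{t+1} for each time step."""
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

# Hypothetical journey: the wrong turn happens at step 0, but the penalty
# (reward of -1) only arrives at the final step, when the mistake is noticed.
rewards = [0, 0, 0, 0, -1]
print(discounted_returns(rewards))
# [-0.6561, -0.729, -0.81, -0.9, -1.0]
```

Note that with plain discounting every intermediate action shares the blame, and the first action, the one actually responsible, receives the smallest share. This is exactly the difficulty the entry describes: a good learner needs some further mechanism to assign the 'credit' to the right choice.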